
feat(restheart-mongo): keploy compat lane sample + Java line coverage gate#134

Open
AkashKumar7902 wants to merge 11 commits into main from feat/restheart-mongo-sample

Conversation

@AkashKumar7902 AkashKumar7902 commented May 1, 2026

Summary

Adds a new restheart-mongo/ sample that owns end-to-end orchestration (compose / bootstrap / traffic / noise filter / coverage) for the RESTHeart 9.x + MongoDB 7 compat lane. The keploy/enterprise CI lane consumes it as a thin wrapper.

The sample drives the full RESTHeart REST surface keploy needs to gate on a record/replay round-trip: REST CRUD on /{db}/{coll} + _aggrs/{name} + _size + _meta + _indexes + _streams/{name}, GraphQL apps (/graphql, /graphql/{appname}), binary files (GridFS), ACL, users, OAuth metadata, sessions+transactions, metrics, and the /ic and /csv services — 134 distinct (method, route) tuples in restheart_record_traffic.

Layout

restheart-mongo/
├── Dockerfile               # FROM softinstigate/restheart:9.2.1 (base; uninstrumented)
├── Dockerfile.coverage      # multi-stage: alpine builder pulls JaCoCo, layered onto distroless restheart
├── docker-compose.yml       # mongo:7 + restheart on a fixed subnet, env-driven
├── docker-compose.coverage.yml  # overlay; arms JaCoCo via JAVA_TOOL_OPTIONS
├── flow.sh                  # bootstrap | record-traffic | coverage
├── keploy.yml.template      # globalNoise for _etag/_oid/lastModified/Date
└── README.md                # contract + run modes

Coverage architecture

Real Java line coverage via JaCoCo 0.8.13, validated locally end-to-end at 52.3% (1663/3182 lines) on the existing record-traffic surface (INSTRUCTION coverage 50.8%).

Critical: the base Dockerfile and docker-compose.yml are untouched. Coverage instrumentation lives in a separate Dockerfile.coverage + docker-compose.coverage.yml overlay, applied only by the standalone GH Actions coverage workflow. The keploy/enterprise compat lane consumes the base compose unchanged and pays zero JaCoCo cost (the agent rewrites bytecode at class-load and adds ~5-10% per-call overhead that would slow record/replay).

How it works:

  1. Dockerfile.coverage is a multi-stage build: stage 1 (alpine builder) fetches the JaCoCo zip and extracts jacocoagent.jar + jacococli.jar; stage 2 layers them into the upstream restheart image (which is distroless — no shell, so jars must be pulled in a builder stage and COPY --chown'd).
  2. The overlay sets JAVA_TOOL_OPTIONS=-javaagent:/opt/jacoco/jacocoagent.jar=output=tcpserver,address=0.0.0.0,port=6300,sessionid=keploy,append=false. The JVM reads this at startup and prepends the -javaagent flag, so the agent attaches before restheart's classes load.
  3. flow.sh coverage shells into a one-off coverage container to dump execution data over the agent's TCP server (no JVM stop needed) and renders an XML report against /opt/restheart/restheart.jar via jacococli report.
  4. The XML's <counter type="LINE" missed="…" covered="…"/> rows are parsed and emitted as Covered N/M (XX.X%).

flow.sh coverage against the base compose (no overlay) is a graceful no-op.
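The dump → report → parse flow can be sketched in shell. This is an illustrative fragment, not the real implementation: the container address, jar paths, and the `parse_line_counter` helper name are assumptions; the actual logic lives in flow.sh's coverage subcommand.

```shell
#!/bin/sh
# Sketch of the JaCoCo dump -> report -> parse flow (paths/addresses are
# illustrative assumptions, not the sample's literal values).

# Steps 1-2 run inside a one-off container on the coverage overlay:
#   java -jar /opt/jacoco/jacococli.jar dump \
#     --address restheart --port 6300 --destfile /tmp/jacoco.exec
#   java -jar /opt/jacoco/jacococli.jar report /tmp/jacoco.exec \
#     --classfiles /opt/restheart/restheart.jar --xml /tmp/jacoco.xml

# Step 3: turn the report-level LINE counter into "Covered N/M (XX.X%)".
parse_line_counter() {
  # Reads JaCoCo XML on stdin; the last LINE counter is the report total.
  sed -n 's/.*<counter type="LINE" missed="\([0-9]*\)" covered="\([0-9]*\)".*/\1 \2/p' |
    tail -n 1 |
    awk '{ printf "Covered %d/%d (%.1f%%)\n", $2, $1 + $2, 100 * $2 / ($1 + $2) }'
}

# Example against the numbers reported in this PR:
echo '<counter type="LINE" missed="1519" covered="1663"/>' | parse_line_counter
# -> Covered 1663/3182 (52.3%)
```

Dumping over the agent's TCP server is what lets the report be taken without stopping the JVM.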

Bonus: base compose RHO bug fix

This PR also fixes a pre-existing config bug in the base docker-compose.yml's RHO env var:

  • RHO separates key->value override entries with ; (the upstream image's default). The prior YAML-folded version used , — which RESTHeart parsed as part of the connection-string value, ignoring the override entirely.
  • The override was missing /http-listener/host->"0.0.0.0"; without it, RESTHeart bound to its localhost default and was unreachable from the host port mapping.

Both are fixed in this PR.
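A sketch of the corrected shape — the connection-string value here is illustrative; the real values live in the sample's docker-compose.yml:

```shell
# Override entries are ';'-separated; '->' separates key from value.
# A ',' between entries is swallowed into the preceding value, and the
# whole override is silently ignored. (connection-string value below is
# an illustrative assumption.)
RHO='/mclient/connection-string->"mongodb://mongo:27017";/http-listener/host->"0.0.0.0"'
export RHO
```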

Coverage gate

.github/workflows/restheart-mongo.yml triggers ONLY on changes under restheart-mongo/**. Build (PR HEAD) and release (base ref) coverage are compared; the PR fails if the drop exceeds 1.0pp. The threshold is overridable via vars.RESTHEART_COVERAGE_THRESHOLD.
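The gate comparison reduces to one awk check. A minimal sketch, assuming the two percentages have already been parsed out of the coverage runs (variable names here are illustrative; the real logic lives in the workflow):

```shell
#!/bin/sh
# Fail when PR coverage drops more than <threshold> percentage points
# below the base ref's coverage. Values below mirror the bot comment
# on this PR (base 0.0%, PR 53.1%).
pr_cov=53.1
base_cov=0.0
threshold="${RESTHEART_COVERAGE_THRESHOLD:-1.0}"

if awk -v pr="$pr_cov" -v base="$base_cov" -v t="$threshold" \
     'BEGIN { exit !(base - pr > t) }'; then
  echo "FAIL: coverage dropped by more than ${threshold}pp"
  exit 1
fi
echo "OK: coverage within threshold"
```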

Run modes

  • Smoke check (without keploy): docker compose up -d && bash flow.sh bootstrap 240 && bash flow.sh record-traffic — exactly what the keploy enterprise lane wraps.
  • Real coverage (without keploy): apply the overlay, run bootstrap + record-traffic + flow.sh coverage (no JVM stop needed).
  • With keploy: keploy/enterprise .woodpecker/restheart-linux.yml lane wraps docker compose up in keploy record / keploy test against the base compose.

See README for full commands.

Consumers

  • keploy/enterprise .woodpecker/restheart-linux.yml — three-cell record/replay matrix that delegates compose + bootstrap + traffic to this sample.

Test plan

  • docker compose up -d boots mongo + restheart cleanly; restheart returns 401 on / (auth required, listener up)
  • flow.sh bootstrap 240 PUTs the seed db + 11 collections + GraphQL apps + ACL + users
  • flow.sh record-traffic exercises 134 (method, route) tuples across the full surface
  • Coverage validated locally end-to-end: 52.3% line coverage of restheart.jar (1663/3182)
  • flow.sh coverage against base compose exits 0 cleanly with INFO message
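The "listener up" probe from the first bullet — treating both 200 and 401 as ready, since / requires auth — can be sketched as follows. The port and timeout are illustrative defaults; the real wait lives in the sample's scripts.

```shell
#!/bin/sh
# Probe RESTHeart until the HTTP listener answers. 401 counts as ready
# because / is auth-protected; 200 would mean auth was disabled.
wait_for_restheart() {
  url="$1"
  deadline=$(( $(date +%s) + ${2:-240} ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url" 2>/dev/null) || code=000
    case "$code" in
      200|401) echo "ready (HTTP $code)"; return 0 ;;
    esac
    sleep 2
  done
  echo "timeout: last_code=$code" >&2
  return 1
}

# Example (port is an assumption; adjust to your compose mapping):
#   wait_for_restheart "http://127.0.0.1:8080/" 240
```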

Mirrors the doccano-django sample shape: the sample owns
orchestration (compose / bootstrap / traffic / coverage), keploy
CI lanes consume it as a thin wrapper.

This is a SCAFFOLD — the full traffic loop driven by the existing
keploy/enterprise lane (`compat_trigger_record_traffic` in
.ci/scripts/restheart-linux.sh, ~600 lines covering CRUD on
/<db>/<coll> + GraphQL + files + ACL + users + bulk + aggregations)
needs to be ported into flow.sh::restheart_record_traffic in a
follow-up. The current loop is deliberately minimal (CRUD on a
seed collection) which is enough to prove the sample boots
end-to-end without keploy.

Layout:
  Dockerfile             — pin to softinstigate/restheart:9.2.1
  docker-compose.yml     — mongo:7 + restheart:9.2.1, env-driven
  flow.sh                — bootstrap | record-traffic | coverage | list-routes
  keploy.yml.template    — globalNoise for _etag/_oid/lastModified/Date
  README.md              — handoff + status notes
Signed-off-by: Akash Kumar <meakash7902@gmail.com>
Copilot AI review requested due to automatic review settings May 1, 2026 01:05

Copilot AI left a comment


Pull request overview

Adds a new restheart-mongo/ sample scaffold to serve as a Keploy compat-lane “owned orchestration” fixture (compose + bootstrap + traffic + route coverage), intended to be consumed by CI lanes as a thin wrapper.

Changes:

  • Introduces a RESTHeart+MongoDB Docker Compose setup with env-driven names and static network settings.
  • Adds flow.sh to bootstrap RESTHeart, drive a minimal REST CRUD traffic loop, and compute route coverage from curated route patterns.
  • Adds docs and a Keploy noise template for non-deterministic fields.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 7 comments.

File                                 Description
restheart-mongo/Dockerfile           Pins the RESTHeart image version used by the sample.
restheart-mongo/docker-compose.yml   Defines RESTHeart+Mongo services, networking, and health checks.
restheart-mongo/flow.sh              Provides bootstrap/traffic/coverage/list-routes orchestration for the sample.
restheart-mongo/keploy.yml.template  Adds a minimal Keploy globalNoise template for RESTHeart responses.
restheart-mongo/README.md            Documents the scaffold status, contract, and local run instructions.


Comment thread restheart-mongo/flow.sh Outdated
Comment on lines +146 to +156
local f method route
local found_keploy=0
while IFS= read -r f; do
  found_keploy=1
  method=$(awk '/^ method:/{print $2; exit}' "$f")
  route=$(awk '/^ url:/{print $2; exit}' "$f")
  route="${route%%\?*}"
  case "$route" in http://*|https://*) route="/${route#*://*/}" ;; esac
  if [ -n "$method" ] && [ -n "$route" ]; then echo "$method $route"; fi
done < <(find keploy -type f -path '*/tests/*.yaml' 2>/dev/null) | sort -u
if [ "$found_keploy" = "1" ]; then return 0; fi
Comment thread restheart-mongo/flow.sh Outdated
Comment on lines +151 to +154
route=$(awk '/^ url:/{print $2; exit}' "$f")
route="${route%%\?*}"
case "$route" in http://*|https://*) route="/${route#*://*/}" ;; esac
if [ -n "$method" ] && [ -n "$route" ]; then echo "$method $route"; fi
Comment thread restheart-mongo/flow.sh Outdated
Comment on lines +160 to +163
method="${line%% *}"; route="${line#* }"
route="${route%%\?*}"
case "$route" in http://*|https://*) route="/${route#*://*/}" ;; esac
[ -n "$method" ] && [ -n "$route" ] && echo "$method $route"
Comment thread restheart-mongo/flow.sh Outdated
Comment on lines +73 to +79
restheart_wait_for_app "$timeout"

# Create the test database. PUT on /<db> is idempotent —
# 201 first time, 200 on subsequent runs.
curl -sS -o /dev/null -H "$RESTHEART_ADMIN_AUTH" -X PUT "${base}/${RESTHEART_DB}" || true
# Seed a collection so reads have something to find.
curl -sS -o /dev/null -H "$RESTHEART_ADMIN_AUTH" -X PUT "${base}/${RESTHEART_DB}/items" || true
Comment thread restheart-mongo/flow.sh Outdated
Comment on lines +86 to +104
log_fired GET "$base/"
curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/" >/dev/null || true

log_fired GET "$base/${RESTHEART_DB}"
curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/${RESTHEART_DB}" >/dev/null || true

log_fired GET "$base/${RESTHEART_DB}/items"
curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/${RESTHEART_DB}/items" >/dev/null || true

# Insert a document.
log_fired POST "$base/${RESTHEART_DB}/items"
curl -fsS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X POST \
"$base/${RESTHEART_DB}/items" \
-d "{\"_id\":\"keploy-${RESTHEART_PHASE}\",\"name\":\"sample item\",\"score\":42}" >/dev/null || true

# Read it back.
log_fired GET "$base/${RESTHEART_DB}/items/keploy-${RESTHEART_PHASE}"
curl -sS -H "$RESTHEART_ADMIN_AUTH" \
"$base/${RESTHEART_DB}/items/keploy-${RESTHEART_PHASE}" >/dev/null || true
Comment on lines +39 to +44
networks:
  restheart-net:
    driver: bridge
    ipam:
      config:
        - subnet: ${RESTHEART_NETWORK_SUBNET:-172.36.0.0/24}
Comment thread restheart-mongo/flow.sh Outdated
Comment on lines +41 to +44
# drive in record-traffic. Override RESTHEART_ADMIN_AUTH to add
# `Authorization: Basic <b64>` to authenticated calls when porting
# the full lane traffic.
RESTHEART_ADMIN_AUTH="${RESTHEART_ADMIN_AUTH:-Basic YWRtaW46c2VjcmV0}"
Replace the minimal record-traffic stub with the complete loop that
the keploy compat lane needs to gate. flow.sh::restheart_record_traffic
now drives the full RESTHeart 9.x surface end-to-end against bare
RESTHeart, and restheart_list_routes enumerates every (method, route)
tuple it fires so coverage stays in lockstep.

Covered surfaces:
- CRUD on /<db>/<coll> + /<db>/<coll>/<docid> (HAL, _size, _meta,
  _indexes, ETag conditional flow, writeMode insert/update/upsert,
  $-operator PATCH variety)
- Aggregations via _meta.aggrs with avars variable interpolation
  (scalars / arrays / nested / missing / malformed)
- Bulk writes (POST array body, filter PATCH, filter DELETE,
  larger 25-doc batches, mixed valid/invalid)
- GraphQL apps (gql-apps registration, query / mutation / fragment /
  alias / multi-op, BSON scalar coercion on outputs and inputs,
  introspection, error paths)
- Files / GridFS (.files buckets, multipart upload, binary download
  with Range requests, metadata fetch, delete)
- ACL rules (predicate evaluator across method / path-prefix /
  qparams-* / bson-request-* / equals[%U,...] / in[%h,...]) plus
  the mongo permission interceptors (readFilter, writeFilter,
  projectResponse, mergeRequest, filterOperatorsBlacklist,
  propertiesBlacklist, allowBulk*)
- Users (/users) with the userPwdHasher bcrypt interceptor; reader /
  writer roles authenticating via Basic + Bearer; wrong-password deny
- Sessions / multi-doc transactions (/_sessions/<id>/_txns/<txnid>)
  with commit and abort branches
- Auth services (/token form grants, JWT, Auth-Token, Digest,
  OAuth metadata under /.well-known/oauth-*)
- Diagnostics (/ping, /metrics in json/prometheus/openmetrics, per-db
  and per-coll, /health/db, OPTIONS preflight, gzip request encoding,
  Accept-Encoding negotiation)
- MongoMountResolver (multiple databases, encoded collection names,
  root /_size and /_meta, trailing-slash and double-slash variants)

restheart_bootstrap now PUTs every collection record-traffic touches.
README.md describes the sample as a complete keploy compat lane
sample and lists every surface it exercises.

Signed-off-by: Akash Kumar <meakash7902@gmail.com>
@AkashKumar7902 AkashKumar7902 changed the title feat(restheart-mongo): keploy compat lane sample (scaffold) feat(restheart-mongo): keploy compat lane sample May 1, 2026
Adds .github/workflows/restheart-mongo.yml plus the helper
.github/workflows/scripts/run-and-measure.sh, modeled on the
doccano-django sample's coverage gate.

* paths-scoped trigger: pull_request and push-to-main both filter
  on `restheart-mongo/**` and `.github/workflows/restheart-mongo.yml`,
  so changes to other samples in this repo do not trigger this
  workflow (and vice versa).
* Three jobs: build-coverage (PR HEAD), release-coverage (PR base
  with first-PR bootstrap escape hatch), and coverage-gate that
  fails the PR if coverage drops more than COVERAGE_THRESHOLD
  percentage points (default 1.0pp, override via repo variable
  RESTHEART_COVERAGE_THRESHOLD).
* Helper script brings the sample up via its own
  docker-compose.yml, waits for the RESTHeart listener (treating
  both 200 and 401 as ready since `/` requires auth), runs
  flow.sh bootstrap → record-traffic → coverage, and emits the
  parsed percentage onto $GITHUB_OUTPUT for the gate job.
* Isolated from the enterprise lane: the enterprise PR pipeline
  (.woodpecker/restheart-linux.yml) calls `flow.sh coverage` only
  informationally and does not gate on it. The gate lives only
  here, on the sample repo, so coverage regressions surface on
  PRs that touch this sample without coupling enterprise CI to
  the route table.

Signed-off-by: Akash Kumar <meakash7902@gmail.com>
build-coverage on PR #134 hung for 8 min when restheart never bound
on port 8080 (last_code=000). The helper script silently looped
through both the wait and flow.sh bootstrap timers, then the gate
job aborted without surfacing why restheart didn't start. Adding
an explicit fail-fast + docker logs dump after the 240s wait so a
future failure surfaces the restheart Java traceback (or the
mongo connection error, or whatever else).

Signed-off-by: Akash Kumar <meakash7902@gmail.com>
Replaces the prior API-route-surface "coverage" (counting fired
routes / curated route table) with actual JaCoCo line coverage
of the RESTHeart 9.x JVM under traffic.

Architecture:
  - `Dockerfile.coverage` is a multi-stage build: stage 1 (alpine)
    fetches JaCoCo 0.8.13 (jacocoagent.jar + jacococli.jar), stage
    2 layers them into the upstream restheart image (which is
    distroless — no shell, no curl, so jars must be pulled in a
    builder stage and COPY'd over).
  - `docker-compose.coverage.yml` is an OVERLAY: applied via `-f
    docker-compose.yml -f docker-compose.coverage.yml`. It sets
    JAVA_TOOL_OPTIONS=-javaagent:.../jacocoagent.jar=output=tcpserver,...
    so JaCoCo attaches at JVM start and listens on port 6300.
    The base `Dockerfile` and `docker-compose.yml` are untouched,
    so keploy/integrations and keploy/enterprise CI lanes consume
    the base compose and pay zero JaCoCo cost (the agent rewrites
    bytecode at class-load, adding ~5-10% per-call overhead that
    would slow record/replay).
  - `flow.sh::restheart_report_coverage` shells into a one-off
    coverage container to dump execution data via JaCoCo TCP and
    render an XML report against /opt/restheart/restheart.jar.
    When called against the base image (no overlay) it prints
    "INFO: ... uninstrumented" and exits 0 so enterprise lanes'
    `flow.sh coverage || true` informational calls keep working.

Also fixes a pre-existing config bug in the base
docker-compose.yml's RHO env var: RHO separates key->value
override entries with ';' (the upstream image's default); the
prior YAML-folded version used ',', which RESTHeart parsed as
part of the connection-string value, leading it to ignore the
override and bind /http-listener/host to its localhost default —
making the HTTP listener unreachable from the host port mapping.
The base compose now uses ';' AND explicitly overrides
/http-listener/host -> "0.0.0.0".

Removed:
  - `restheart_list_routes` (curated route table denominator).
  - `restheart_list_recorded_routes` (keploy-tests / fired-routes
    reader).
  - The legacy route-surface `restheart_report_coverage` body.
  - `list-routes` subcommand.

Validated locally: helper produced `coverage=52.3` to
GITHUB_OUTPUT against a clean stack (1663/3182 lines covered
in restheart.jar; INSTRUCTION coverage 50.8%).

Signed-off-by: Akash Kumar <meakash7902@gmail.com>

github-actions Bot commented May 1, 2026

restheart-mongo sample coverage

ref          coverage
base (main)  0.0%
this PR      53.1%

Threshold: PR may not drop coverage by more than 1.0pp. Override per-repo via the RESTHEART_COVERAGE_THRESHOLD actions variable.

Signed-off-by: Akash Kumar <meakash7902@gmail.com>
…RED_ROUTES refs

Signed-off-by: Akash Kumar <meakash7902@gmail.com>
@AkashKumar7902 AkashKumar7902 changed the title feat(restheart-mongo): keploy compat lane sample feat(restheart-mongo): keploy compat lane sample + Java line coverage gate May 1, 2026
…terministic replay

Two record/replay determinism fixes the full keploy/enterprise compat
matrix needs to go green:

1) RHO `/jwtConfigProvider/key` pinned to a fixed string. Default is
   `key: null` which makes RESTHeart auto-generate a random HS256 secret
   per container start. Recorded JWT bearers carry an HS256 signature
   over the payload using that secret, so a fresh-container replay phase
   rejects the recorded bearer with 401 even though --freezeTime keeps
   `exp` valid. Pinning the secret keeps the bearer signature verifiable
   across record→replay container restarts.

2) keploy.yml.template rewritten to use NESTED globalNoise format
   (`body: { field: [] }`) instead of flat dotted keys (`body.field: []`).
   Keploy's matcher reads `globalNoise.global` as map[section][field]regex
   and treats dotted keys as literal outer keys, never matching the body
   section. Verified by walking pkg/matcher/http/match.go and the
   employee-manager sample's commented example. Fields added:
   header.{Date, Content-Length, Auth-Token},
   body.{_etag, _oid, _id, lastModified, client_ip, latencyMs, access_token}.

Validated locally with all three matrix cells (record-stable-replay-pr,
record-pr-replay-pr, record-pr-replay-stable) — each reaches 296/296
PASSED with these two changes plus the lane-side --port and --freezeTime
flags already in keploy/enterprise#1889.
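The nested globalNoise shape described in (2) can be sketched as a shell heredoc. The field list comes from the commit message above; the output path and exact indentation are illustrative — the canonical file is keploy.yml.template in the sample.

```shell
#!/bin/sh
# Write the NESTED globalNoise shape. Flat dotted keys ("body._etag: []")
# are treated by keploy's matcher as literal outer keys and never match
# the body section, so nesting is required.
cat > /tmp/keploy.yml.template <<'EOF'
globalNoise:
  global:
    header:
      Date: []
      Content-Length: []
      Auth-Token: []
    body:
      _etag: []
      _oid: []
      _id: []
      lastModified: []
      client_ip: []
      latencyMs: []
      access_token: []
EOF
```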
RESTHeart 9.x's default JVM uses MaxRAMPercentage=25, which on a
typical Woodpecker runner cgroup (~8GB) lands ~2GB heap. With the
keploy/enterprise compat matrix running 3 restheart cells concurrently
alongside parse-server / umami / doccano cells (12 lane-cells total
on one runner), aggregate memory pressure triggers cgroup OOM and
the kernel SIGKILLs lighter processes (bash, curl) mid-bootstrap.

Observed in keploy/enterprise pipeline 3721/26 (record-stable-replay-pr):
RESTHeart came up cleanly (`127.0.0.1:18070 is now reachable`), then
`bash flow.sh bootstrap 240` got SIGKILL'd (exit 137) before any
seed PUTs landed. Same shape across all 3 restheart cells in 3721.

Cap heap at -Xmx512m (start at -Xms128m). Local single-cell
record/replay peaks at ~280MB heap, so 512m is comfortable headroom
without bleeding into neighbours. With the bound, each cell's
footprint stays under ~700MB (heap + native + mongo); 3 restheart
cells fit alongside other lanes on a shared runner.

Validated locally: cell record-pr-replay-pr 296/0 PASSED with the
new JAVA_TOOL_OPTIONS in place. RESTHeart logs confirm
"Picked up JAVA_TOOL_OPTIONS: -Xms128m -Xmx512m ..." at startup.
…:` at curl call sites

flow.sh's 214 admin-authenticated curl invocations passed the credential
as `-H "$RESTHEART_ADMIN_AUTH"` where RESTHEART_ADMIN_AUTH defaults to
`Basic YWRtaW46c2VjcmV0` (the credential value alone, no header name).
curl renders that as a header value with no name and the request
arrives at RESTHeart with no Authorization header, producing 401.

Effect on the lane:
  - bootstrap's `PUT /<db>/<coll>` seed-collection writes all 401'd
    silently (each is `|| true`'d), so the seed collections were
    NEVER created. record-traffic ran against an empty mongo.
  - record-traffic's 214 admin-authenticated calls all 401'd. The
    recorded testcases are 256 × 401 (out of 296), 25 × 404, and
    only 9 × 200 (the public /ping and OPTIONS preflights).
  - Lane still passed-by-equality because both record and replay
    fail identically (the 401 wire bytes match), but the lane was
    not actually exercising RESTHeart's admin surface.

Fix: change the call site, not the env. `-H "$RESTHEART_ADMIN_AUTH"`
→ `-H "Authorization: $RESTHEART_ADMIN_AUTH"`. The env var stays a
plain credential ("Basic YWRtaW46c2VjcmV0"), the curl line constructs
the full HTTP header. 214 sites swapped via:
  sed -i 's|-H "\$RESTHEART_ADMIN_AUTH"|-H "Authorization: $RESTHEART_ADMIN_AUTH"|g'

Bare-smoke validation (docker compose up + flow.sh bootstrap +
flow.sh record-traffic against vanilla restheart 9.2.1):

  before:  256 × 401   25 × 404    9 × 200
  after:   110 × 200   59 × 201   49 × 404   33 × 405   25 × 204
            16 × 401   11 × 400    8 × 403    4 × 409    3 × 500

The 16 remaining 401s are the genuinely intentional ones — bogus
JWT bearer probe, malformed Digest, no-auth liveness, and similar.
The 8 × 403 are real ACL-deny coverage that was previously
unreachable behind the universal 401.
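The call-site shape before and after the fix, as a minimal sketch (the credential is the sample's documented default, Basic YWRtaW46c2VjcmV0, i.e. base64 of admin:secret; in flow.sh it is set via a ${VAR:-default} expansion):

```shell
#!/bin/sh
RESTHEART_ADMIN_AUTH="Basic YWRtaW46c2VjcmV0"

# BEFORE (bug): the bare value is not a valid "Name: value" header, so
# curl sends no Authorization header and RESTHeart answers 401:
#   curl -H "$RESTHEART_ADMIN_AUTH" "$url"

# AFTER (fix): build the full header at the call site; the env var stays
# a plain credential value:
auth_header="Authorization: $RESTHEART_ADMIN_AUTH"
echo "$auth_header"
# -> Authorization: Basic YWRtaW46c2VjcmV0
```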
…e-freeze compat

After fixing the Authorization-header-prefix bug in flow.sh (parent
commit), 214 admin-auth calls that previously 401'd now actually
mutate state and reach RESTHeart's normal write/read path. That
exposed two new categories of record/replay divergence the all-401
recording had hidden:

1) Etag header: RESTHeart auto-stamps a hash on every doc / collection
   response. body._etag is already in globalNoise, but the same hash
   also appears as a response header (Etag) — needs to be noised
   separately. Without it, ~85 of 306 tests fail on header.Etag diff.

2) Location header: POST /_sessions returns Location: /_sessions/<uuid>
   where the UUID is server-generated. 2 tests fail without noising.

Both added to keploy.yml.template's globalNoise.global.header.

Separate fix on the compose side: disable mongoAclAuthorizer's cache
(`/mongoAclAuthorizer/cache-enabled->false` via RHO). Default is
cache-enabled with TTL=5_000ms, backed by Caffeine. Caffeine's
internal ticker uses System.nanoTime, which keploy's --freezeTime
LD_PRELOAD shim intercepts — so at replay the wall-clock-bound TTL
never expires. flow.sh's `sleep 6` between `POST /acl` and the
writer-permission test works at record (clock advances, cache reloads
the new rule) but is ineffective at replay (cache stuck on whatever
it loaded first). Result: writer gets 403 at replay despite recorded
200. Disabling the cache makes ACL rules read-through from mongo on
every request — small perf cost in the test environment but
eliminates the time-freeze × cache-TTL interaction.

Local cell record-pr-replay-pr verified 306/306 PASSED with both
fixes in place. Test count went 296 → 306 (10 additional tests
captured because admin operations now mutate state and unlock paths
that previously 404'd).
